UEFI: Add support for different framebuffer configs on real hardware vs VMs #364
kennystrawnmusic wants to merge 23 commits into rust-osdev:main
Conversation
Co-authored-by: Philipp Oppermann <dev@phil-opp.com>
Thanks a lot for this PR! I agree that the current framebuffer configuration is missing some crucial outputs. We currently choose the last mode that matches the […]. So I think that we're missing at least the following config options:
This PR seems to partially address points 2 and 3, but it doesn't allow configuring the new behavior. I think I would prefer a less-opinionated approach that allows users to configure the behavior themselves. So how about the following:
This way, we get a similar behavior by default, but users can adjust the behavior as they like. What do you think?
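For illustration, here is a rough sketch of how such a user-configurable mode-selection option might be shaped. None of the names below (`FramebufferModeSelection`, `FramebufferConfig`, `choose_mode`) come from the bootloader crate; they are made up for this example, and the modes are simplified to plain `(width, height)` pairs instead of the actual UEFI GOP types:

```rust
/// Illustrative only: these names are assumptions, not the crate's actual API.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum FramebufferModeSelection {
    /// Keep whatever mode the firmware already set up.
    KeepFirmwareMode,
    /// Pick the highest-resolution mode that satisfies the minimums.
    HighestResolution,
    /// Pick the smallest mode that still satisfies the minimums.
    MinimumFit,
}

#[derive(Debug, Clone, Copy)]
pub struct FramebufferConfig {
    pub mode_selection: FramebufferModeSelection,
    pub minimum_width: Option<usize>,
    pub minimum_height: Option<usize>,
}

impl Default for FramebufferConfig {
    fn default() -> Self {
        Self {
            mode_selection: FramebufferModeSelection::MinimumFit,
            minimum_width: None,
            minimum_height: None,
        }
    }
}

/// Choose an index into `modes` (a list of `(width, height)` pairs).
pub fn choose_mode(modes: &[(usize, usize)], cfg: &FramebufferConfig) -> Option<usize> {
    let fits = |&(w, h): &(usize, usize)| {
        cfg.minimum_width.map_or(true, |min| w >= min)
            && cfg.minimum_height.map_or(true, |min| h >= min)
    };
    match cfg.mode_selection {
        // `None` means "leave the firmware's current mode untouched".
        FramebufferModeSelection::KeepFirmwareMode => None,
        FramebufferModeSelection::HighestResolution => modes
            .iter()
            .enumerate()
            .filter(|&(_, m)| fits(m))
            .max_by_key(|&(_, &(w, h))| w * h)
            .map(|(i, _)| i),
        FramebufferModeSelection::MinimumFit => modes
            .iter()
            .enumerate()
            .filter(|&(_, m)| fits(m))
            .min_by_key(|&(_, &(w, h))| w * h)
            .map(|(i, _)| i),
    }
}
```

With something along these lines, the default could stay roughly comparable to today's behavior, while `HighestResolution` would cover the real-hardware case this PR is after.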
Back to the drawing board; going to reconfigure this using that suggestion. Much better I think.
Alright, submitted newer commits that address both of your concerns:
Changed the title of this pull request to reflect these suggestions |
```rust
if let Some(hypervisor) = CpuId::new().get_hypervisor_info() {
    if let Hypervisor::Xen = hypervisor.identify() {
        // Use same rules as real hardware since Xen uses the whole screen
```
I'm a bit puzzled about what this PR is doing.
I often use VMware and QEMU/KVM for running guest VMs as I would a host, especially when enabling paravirtualization features for performance similar to the host system. These guests may run windowed or fullscreen.
It's not too concerning for me as this is only applicable to a bootloader based on this project, but the motivation seems focused on QEMU windowed for development/testing purposes?
As a user, if I have a VM guest fullscreen on a display(s), I'd find this different implicit behaviour a bit confusing, especially since there's an exception made for Xen.
I don't think you'll find GRUB, rEFInd, or systemd-boot handling the resolution used differently? I believe they have a config setting if you want to explicitly prefer a resolution or scaling/fit?
I'm not that familiar with the project, but is this just a default with a way to opt out? Is there actual value in the physical + virtual framebuffer structs that this PR introduces for this distinction?
Or would it be better to match what other bootloader/managers do, and just provide a default with build (or runtime) config to have the scaling behaviour you prefer?
You could then more easily build for your VM testing, with a different config for production builds on real hardware (or VM guests). If switching based on the detected environment is desired beyond testing, that may be better served as an opt-in feature/config?
If you disagree, no worries 👍
Maybe consider documenting the behaviour then so it's easier to troubleshoot when a dev/user encounters it and tries to understand why it scales differently on most hypervisors.
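A minimal sketch of the opt-in approach suggested above, assuming a hypothetical `DisplayConfig` knob (only `raw_cpuid`'s `CpuId::get_hypervisor_info` is real; the rest is invented for illustration):

```rust
use raw_cpuid::CpuId;

/// Hypothetical config field; nothing like this exists in the crate yet.
#[derive(Debug, Clone, Copy, Default)]
pub struct DisplayConfig {
    /// When false (the default), VM guests are treated exactly like real hardware.
    pub prefer_smaller_mode_under_hypervisor: bool,
}

/// Only deviate from the real-hardware behaviour if the user opted in
/// *and* CPUID actually reports a hypervisor.
pub fn should_use_vm_mode(cfg: &DisplayConfig) -> bool {
    cfg.prefer_smaller_mode_under_hypervisor
        && CpuId::new().get_hypervisor_info().is_some()
}
```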
Test pull request to see if this works in my kernel
@kennystrawnmusic what exactly happened? I see you added new commits yesterday, then closed this PR with no context? (and my comment was entirely ignored too) Looks like some of the PR was moved to a new PR: #394
The other PR was on a completely different branch that doesn't have these changes (only the ones that add in the […])
I'd still like to discourage enforcing such behaviour on QEMU. If the firmware is deployed to a VM, it can be for production usage and utilize the full display. One would expect the resolution not to be messed with; if this only serves as a convenience during development when a hypervisor is detected, it should have an opt-in / opt-out.
It may look nice to have a slightly lower resolution when virtualizing the OS you're building, but during real hardware tests it makes far more sense to choose a higher-resolution mode than during QEMU tests. Otherwise, you're left with 2 options:
So, creating this pull request to fix the problem. Using `raw_cpuid` makes it possible to check whether or not you're running inside a hypervisor before setting a display mode, which is important for improving this, and using `.max()` on the mode iterator when real hardware is detected ensures that the absolute highest resolution possible is used in that case.
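A minimal sketch of that idea, using plain `(width, height)` pairs instead of the actual UEFI GOP mode types. The `pick_resolution` helper and the 1280x720 VM cap are illustrative choices rather than what the PR necessarily does, and `max_by_key` stands in for the `.max()` call mentioned above; only the `raw_cpuid` items (`CpuId`, `get_hypervisor_info`, `Hypervisor`, `identify`) are the crate's real API:

```rust
use raw_cpuid::{CpuId, Hypervisor};

/// Hypothetical helper: pick the largest mode on real hardware (and Xen),
/// but cap the resolution when running inside other hypervisors.
fn pick_resolution(modes: &[(usize, usize)]) -> Option<(usize, usize)> {
    let windowed_vm = match CpuId::new().get_hypervisor_info() {
        // Xen guests drive the whole screen, so treat them like real hardware.
        Some(info) => !matches!(info.identify(), Hypervisor::Xen),
        None => false,
    };

    if windowed_vm {
        // Inside a VM, prefer a mode that fits comfortably in a host window
        // (the 1280x720 cap is an arbitrary choice for this example).
        modes
            .iter()
            .copied()
            .filter(|&(w, h)| w <= 1280 && h <= 720)
            .max_by_key(|&(w, h)| w * h)
    } else {
        // On real hardware, taking the maximum over the mode list gives
        // the highest resolution the display supports.
        modes.iter().copied().max_by_key(|&(w, h)| w * h)
    }
}
```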